Model-Driven Quantum Code Generation Using Large Language Models and Retrieval-Augmented Generation

Siavash, Nazanin, Moin, Armin

arXiv.org Artificial Intelligence

This paper introduces a novel research direction for model-to-text/code transformations by leveraging Large Language Models (LLMs) enhanced with Retrieval-Augmented Generation (RAG) pipelines. The focus is on quantum and hybrid quantum-classical software systems, where model-driven approaches can help reduce costs and mitigate the risks associated with the heterogeneous platform landscape and the shortage of developer skills. We validate one of the proposed ideas: generating code from UML model instances of software systems. The generated Python code uses Qiskit, a well-established library, to execute on gate-based (circuit-based) quantum computers. The RAG pipeline we deploy incorporates sample Qiskit code from public GitHub repositories. Experimental results show that well-engineered prompts can improve CodeBLEU scores by up to a factor of four, yielding more accurate and consistent quantum code. The proposed research direction can be extended through future experiments that address our other research questions and ideas, such as deploying software system model instances as the source of information in RAG pipelines, or using LLMs for code-to-code transformations, for instance in transpilation use cases.
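The retrieval step of such a RAG pipeline can be sketched in a few lines of plain Python. Everything below (the toy snippet corpus, the token-overlap scoring, and the prompt template) is our own illustration, not the paper's implementation:

```python
# Minimal sketch of the retrieval step in a RAG code-generation pipeline:
# rank stored Qiskit snippets by token overlap with a query, then assemble
# a prompt. Corpus, scoring, and template are illustrative assumptions.

def tokenize(text):
    """Crude tokenizer: lowercase, strip parentheses, split on whitespace."""
    return set(text.lower().replace("(", " ").replace(")", " ").split())

def retrieve(query, corpus, k=2):
    """Return the k snippets sharing the most tokens with the query."""
    ranked = sorted(corpus, key=lambda s: len(tokenize(s) & tokenize(query)),
                    reverse=True)
    return ranked[:k]

def build_prompt(model_description, examples):
    """Prepend retrieved examples to the generation request."""
    context = "\n\n".join(examples)
    return (f"Reference Qiskit examples:\n{context}\n\n"
            f"Generate Qiskit code for this UML model:\n{model_description}")

corpus = [
    "from qiskit import QuantumCircuit\nqc = QuantumCircuit(2)\nqc.h(0)\nqc.cx(0, 1)",
    "qc.measure_all()",
    "import numpy as np",
]
prompt = build_prompt("two entangled qubits with measurement",
                      retrieve("QuantumCircuit entangle cx", corpus))
print(prompt.splitlines()[0])  # Reference Qiskit examples:
```

In the paper's setting the corpus would hold Qiskit examples mined from public GitHub repositories, and retrieval would typically use embedding similarity rather than raw token overlap.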


Leveraging AI for Productive and Trustworthy HPC Software: Challenges and Research Directions

Teranishi, Keita, Menon, Harshitha, Godoy, William F., Balaprakash, Prasanna, Bau, David, Ben-Nun, Tal, Bhatele, Abhinav, Franchetti, Franz, Franusich, Michael, Gamblin, Todd, Georgakoudis, Giorgis, Goldstein, Tom, Guha, Arjun, Hahn, Steven, Iancu, Costin, Jin, Zheming, Jones, Terry, Low, Tze Meng, Mankad, Het, Miniskar, Narasinga Rao, Monil, Mohammad Alaul Haque, Nichols, Daniel, Parasyris, Konstantinos, Pophale, Swaroop, Valero-Lara, Pedro, Vetter, Jeffrey S., Williams, Samuel, Young, Aaron

arXiv.org Artificial Intelligence

We discuss the challenges and propose research directions for using AI to revolutionize the development of high-performance computing (HPC) software. AI technologies, in particular large language models, have transformed every aspect of software development. For its part, HPC software is recognized as a highly specialized scientific field of its own. We discuss the challenges associated with leveraging state-of-the-art AI technologies to develop such a unique and niche class of software and outline our research directions in the two US Department of Energy–funded projects for advancing HPC software via AI: Ellora and Durban.


Looking Forward: Challenges and Opportunities in Agentic AI Reliability

Xing, Liudong, Lin, Janet

arXiv.org Artificial Intelligence

The AI conversation can be traced back to Alan Turing's milestone paper published in 1950, which posed the fundamental question "Can machines think?" [1]. In 1956, AI got its name and mission as a scientific field at the first AI conference, held at Dartmouth College [2]. Following AI's foundational period in the 1950s–1970s, the field has evolved from early rule-based systems (1970s–1990s), through classical machine learning and deep learning with neural networks (1990s–2020s), to today's generative and agentic AI systems (since the 2010s). Correspondingly, as a vital requirement of these systems, the reliability concept and its associated concerns have also been evolving, particularly in the interpretation of "required function" (see Table 1 in Chapter 10), based on definitions in standards such as ISO 8402: "The ability of an item to perform a required function, under given environmental and operational conditions and for a stated period of time." While a conventional AI system is concerned with providing stable and accurate classifications, predictions, or optimizations, a reliable generative AI system focuses on producing outputs that are trustworthy, consistent, safe, and contextually appropriate [3]. Building on both, a reliable agentic AI system should additionally carry out reasoning, goal alignment, planning, and safe adaptation and interaction in dynamic and collaborative multi-agent contexts. This expansion of reliability concepts has introduced new challenges and research opportunities, as exemplified in Figure 1. In the following sections, we shed light on these challenges and opportunities in building reliable AI systems, particularly agentic AI systems.


Pinching Antennas Meet AI in Next-Generation Wireless Networks

Fang, Fang, Ding, Zhiguo, Leung, Victor C. M., Hanzo, Lajos

arXiv.org Artificial Intelligence

Abstract: Next-generation (NG) wireless networks must embrace innate intelligence in support of demanding emerging applications, such as extended reality and autonomous systems, under ultra-reliable and low-latency requirements. Pinching antennas (PAs), a new flexible low-cost technology, can create line-of-sight links by dynamically activating small dielectric pinches along a waveguide on demand. As a compelling complement, artificial intelligence (AI) offers the intelligence needed to manage the complex control of PA activation positions and resource allocation in these dynamic environments. This article explores the 'win-win' cooperation between AI and PAs: AI facilitates the adaptive optimization of PA activation positions along the waveguide, while PAs support edge AI tasks such as federated learning and over-the-air aggregation. We also discuss promising research directions, including large language model-driven PA control frameworks and how PA-AI integration can advance semantic communications and integrated sensing and communication. This synergy paves the way for adaptive, resilient, and self-optimizing NG networks. Next-generation (NG) wireless systems are expected to provide ultra-high data rates, massive connectivity, and ubiquitous intelligence. However, meeting these radical demands requires overcoming severe propagation losses and blockage to create near line-of-sight (LoS) links. Recently, pinching antennas (PAs) have emerged as a flexible antenna technology for creating LoS links on demand [1].
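The kind of activation-position control that AI would automate here can be illustrated with a toy geometric sketch. The waveguide layout, mounting height, and candidate positions below are our own invented assumptions, not taken from the article:

```python
# Toy illustration of choosing a pinching-antenna activation position:
# pick the point along a straight waveguide (the x-axis, at height h) that
# minimizes the free-space distance to a ground user. Geometry is invented.
import math

def best_activation(user_xy, candidates, h=3.0):
    """Return the candidate x-position on the waveguide closest to the user."""
    ux, uy = user_xy

    def dist(x):  # PA at (x, 0, h), user at (ux, uy, 0)
        return math.sqrt((x - ux) ** 2 + uy ** 2 + h ** 2)

    return min(candidates, key=dist)

print(best_activation((4.2, 1.0), [0, 2, 4, 6]))  # 4
```

A real controller would also account for waveguide propagation loss, multi-user scheduling, and resource allocation, which is precisely where learning-based optimization becomes attractive.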


Exploring the Feasibility of End-to-End Large Language Model as a Compiler

Zhang, Hongbin, Gao, Shihao, Liu, Yang, Xing, Mingjie, Wu, Yanjun, Zhao, Chen

arXiv.org Artificial Intelligence

In recent years, end-to-end Large Language Model (LLM) technology has shown substantial advantages across various domains. As critical system software and infrastructure, compilers are responsible for transforming source code into target code. While LLMs have been leveraged to assist in compiler development and maintenance, their potential as an end-to-end compiler remains largely unexplored. This paper explores the feasibility of LLM as a Compiler (LaaC) and its future directions. We designed the CompilerEval dataset and framework specifically to evaluate the capabilities of mainstream LLMs in source code comprehension and assembly code generation. In the evaluation, we analyzed various errors, explored multiple methods to improve LLM-generated code, and evaluated cross-platform compilation capabilities. Experimental results demonstrate that LLMs exhibit basic capabilities as compilers but currently achieve low compilation success rates. By optimizing prompts, scaling up the model, and incorporating reasoning methods, the quality of assembly code generated by LLMs can be significantly enhanced. Based on these findings, we maintain an optimistic outlook for LaaC and propose practical architectural designs and future research directions. We believe that with targeted training, knowledge-rich prompts, and specialized infrastructure, LaaC has the potential to generate high-quality assembly code and drive a paradigm shift in the field of compilation.
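A CompilerEval-style evaluation loop can be sketched as a simple pass-rate computation over LLM outputs. The well-formedness check below is a deliberately crude stand-in of our own invention, not the framework's real validation:

```python
# Toy sketch of a compilation-success-rate metric: count how many
# LLM-"compiled" outputs pass a basic well-formedness check. The check
# (non-empty, contains a 'ret' instruction) is a placeholder for a real
# assemble-and-run validation step.

def looks_like_asm(output):
    """Placeholder check: non-empty text containing a 'ret' instruction."""
    lines = [l.strip() for l in output.splitlines() if l.strip()]
    return bool(lines) and any(l.startswith("ret") for l in lines)

def success_rate(outputs):
    """Fraction of outputs that pass the well-formedness check."""
    passed = sum(looks_like_asm(o) for o in outputs)
    return passed / len(outputs)

samples = [
    "mov eax, 42\nret",         # plausible assembly: passes
    "Sorry, I cannot compile",  # refusal: fails the check
]
print(success_rate(samples))  # 0.5
```

In practice, `looks_like_asm` would be replaced by invoking an actual assembler and executing the resulting binary against test cases, which is what makes low compilation success rates measurable in the first place.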


Deep Ideation: Designing LLM Agents to Generate Novel Research Ideas on Scientific Concept Network

Zhao, Keyu, Lin, Weiquan, Zheng, Qirui, Xu, Fengli, Li, Yong

arXiv.org Artificial Intelligence

Novel research ideas play a critical role in advancing scientific inquiry. Recent advancements in Large Language Models (LLMs) have demonstrated their potential to generate novel research ideas by leveraging large-scale scientific literature. However, previous work in research ideation has primarily relied on simplistic methods, such as keyword co-occurrence or semantic similarity. These approaches focus on identifying statistical associations in the literature but overlook the complex, contextual relationships between scientific concepts, which are essential to effectively leverage the knowledge embedded in human literature. For instance, papers that simultaneously mention "keyword A" and "keyword B" often present research ideas that integrate both concepts. Additionally, some LLM-driven methods propose and refine research ideas using the model's internal knowledge, but they fail to effectively utilize the scientific concept network, limiting the grounding of ideas in established research. To address these challenges, we propose the Deep Ideation framework, which integrates a scientific concept network that captures both keyword co-occurrence and contextual relationships, enriching LLM-driven ideation. The framework introduces an explore-expand-evolve workflow to iteratively refine research ideas, using an Idea Stack to track progress. A critic engine, trained on real-world reviewer feedback, guides the process by providing continuous feedback on the novelty and feasibility of ideas. Our experiments show that our approach improves the quality of generated ideas by 10.67% compared to other methods, with ideas surpassing top conference acceptance levels. Human evaluation highlights their practical value in scientific research, and ablation studies confirm the effectiveness of each component in the workflow. The code repository is available at https://github.com/kyZhao-1/Deep-Ideation.
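The keyword co-occurrence layer of such a scientific concept network can be sketched with standard-library Python; the papers and keywords below are invented for illustration:

```python
# Minimal sketch of a keyword co-occurrence network of the kind an
# ideation framework builds on. Papers and keywords are invented.
from collections import Counter
from itertools import combinations

papers = [
    {"graph neural network", "drug discovery"},
    {"graph neural network", "molecule generation", "drug discovery"},
    {"reinforcement learning", "molecule generation"},
]

# Edge weight = number of papers mentioning both keywords together.
edges = Counter()
for kws in papers:
    for a, b in combinations(sorted(kws), 2):
        edges[(a, b)] += 1

print(edges[("drug discovery", "graph neural network")])  # 2
```

Deep Ideation layers contextual relationships on top of edge counts like these; the sketch shows only the co-occurrence statistics that earlier ideation methods relied on.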


What's the next frontier for Data-centric AI? Data Savvy Agents

Seedat, Nabeel, Liu, Jiashuo, van der Schaar, Mihaela

arXiv.org Artificial Intelligence

The recent surge in AI agents that autonomously communicate, collaborate with humans and use diverse tools has unlocked promising opportunities in various real-world settings. However, a vital aspect remains underexplored: how agents handle data. Scalable autonomy demands agents that continuously acquire, process, and evolve their data. In this paper, we argue that data-savvy capabilities should be a top priority in the design of agentic systems to ensure reliable real-world deployment. Specifically, we propose four key capabilities to realize this vision: (1) Proactive data acquisition: enabling agents to autonomously gather task-critical knowledge or solicit human input to address data gaps; (2) Sophisticated data processing: requiring context-aware and flexible handling of diverse data challenges and inputs; (3) Interactive test data synthesis: shifting from static benchmarks to dynamically generated interactive test data for agent evaluation; and (4) Continual adaptation: empowering agents to iteratively refine their data and background knowledge to adapt to shifting environments. While current agent research predominantly emphasizes reasoning, we hope to inspire a reflection on the role of data-savvy agents as the next frontier in data-centric AI.


Pedagogy-driven Evaluation of Generative AI-powered Intelligent Tutoring Systems

Maurya, Kaushal Kumar, Kochmar, Ekaterina

arXiv.org Artificial Intelligence

The interdisciplinary research domain of Artificial Intelligence in Education (AIED) has a long history of developing Intelligent Tutoring Systems (ITSs) by integrating insights from technological advancements, educational theories, and cognitive psychology. The remarkable success of generative AI (GenAI) models has accelerated the development of large language model (LLM)-powered ITSs, which have the potential to imitate human-like, pedagogically rich, and cognitively demanding tutoring. However, the progress and impact of these systems remain largely untraceable due to the absence of reliable, universally accepted, and pedagogy-driven evaluation frameworks and benchmarks. Most existing educational dialogue-based ITS evaluations rely on subjective protocols and non-standardized benchmarks, leading to inconsistencies and limited generalizability. In this work, we take a step back from mainstream ITS development and provide a comprehensive account of state-of-the-art evaluation practices, highlighting the associated challenges through real-world case studies from careful and caring AIED research. Finally, building on insights from previous interdisciplinary AIED research, we propose three practical, feasible, and theoretically grounded research directions, rooted in learning science principles and aimed at establishing fair, unified, and scalable evaluation methodologies for ITSs.


comments. Reviewer #1 wants to see an algorithm that works when b

Neural Information Processing Systems

We thank all the reviewers for their time and valuable comments. "Provide an algorithm to output a distribution that's close to the target, even if b has negative components." We will mention this in the paper. This is an interesting direction for future research. "What happens when we increase the number of layers?"